Unsupervised Domain Adaptation with Differentially Private Gradient Projection
Authors
Abstract
Domain adaptation is a viable solution for deep learning with small data. However, domain adaptation models trained on data containing sensitive information may violate personal privacy. In this article, we propose an unsupervised domain adaptation method, called DP-CUDA, based on differentially private gradient projection and the contradistinguisher. Compared with the traditional domain adaptation process, DP-CUDA first searches for domain-invariant features between the source and target domains and then transfers knowledge. Specifically, the model is first trained in the source domain by supervised learning from labeled data. During the training of the target model, the learned features are used to solve the classification task in an end-to-end manner using unlabeled target data directly, and noise is injected into the gradient. We conducted extensive experiments on a variety of benchmark datasets, including MNIST, USPS, SVHN, VisDA-2017, Office-31, and Amazon Review, to demonstrate both the utility and the privacy-preserving properties of our method.
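The privacy mechanism described in the abstract, noise injected into the gradient during target training, follows the familiar differentially private update pattern of bounding the gradient norm and adding calibrated Gaussian noise before the optimizer step. The sketch below illustrates only that generic pattern, not the authors' DP-CUDA code: the toy linear model, clip_norm, and noise_multiplier values are illustrative assumptions, and a full DP-SGD implementation would clip per-example gradients rather than the batch gradient as done here for brevity.

```python
# Minimal sketch of one noisy gradient step (assumed illustration, not DP-CUDA itself).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(10, 2)                          # toy classifier standing in for the target model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 10)                           # toy batch; labels here only to form a loss
y = torch.randint(0, 2, (32,))

clip_norm = 1.0                                   # assumed bound on the gradient's L2 norm
noise_multiplier = 1.1                            # assumed Gaussian noise scale relative to clip_norm

opt.zero_grad()
loss_fn(model(x), y).backward()

# Clip the gradient norm, then add Gaussian noise scaled to the clipping bound.
torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
with torch.no_grad():
    for p in model.parameters():
        if p.grad is not None:
            # Noise std follows the usual noise_multiplier * clip_norm convention,
            # divided by the batch size because the loss (and gradient) is averaged.
            p.grad += torch.randn_like(p.grad) * noise_multiplier * clip_norm / x.shape[0]

opt.step()                                        # the update uses the privatized gradient
```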
Similar resources
Unsupervised Transductive Domain Adaptation
Supervised learning with large scale labeled datasets and deep layered models has made a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers generalization issues under the presence of a domain shift between the training and the test data distribution. In this regard, unsupervised domain adaptation algorithms have been proposed to directly address t...
Robust Decentralized Differentially Private Stochastic Gradient Descent
Stochastic gradient descent (SGD) is one of the most applied machine learning algorithms in unreliable large-scale decentralized environments. In this type of environment data privacy is a fundamental concern. The most popular way to investigate this topic is based on the framework of differential privacy. However, many important implementation details and the performance of differentially priv...
Unsupervised Domain Adaptation with Feature Embeddings
Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches often require the specification of “pivot features” that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature template structure common in NLP problems.
Unsupervised Domain Adaptation with Residual Transfer Networks
The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source doma...
Unsupervised Multi-Domain Adaptation with Feature Embeddings
Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches have two major weaknesses. First, they often require the specification of “pivot features” that generalize across domains, which are selected by taskspecific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature tem...
Journal
Journal title: International Journal of Intelligent Systems
Year: 2023
ISSN: 1098-111X, 0884-8173
DOI: https://doi.org/10.1155/2023/8426839